The proliferation of unmanned aircraft systems (UAS) has caused airspace regulation authorities to examine the interoperability of these aircraft with collision avoidance systems initially designed for large transport category aircraft. Limitations in the currently mandated TCAS led the Federal Aviation Administration to commission the development of a new solution, the Airborne Collision Avoidance System X (ACAS X), designed to enable a collision avoidance capability for multiple aircraft platforms, including UAS. While prior research explored using deep reinforcement learning (DRL) algorithms for collision avoidance, DRL did not perform as well as existing solutions. This work explores the benefits of using a DRL collision avoidance system whose parameters are tuned using a surrogate optimizer. We show that the use of a surrogate optimizer leads to a DRL approach that can increase safety and operational viability and support future capability development for UAS collision avoidance.
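As an illustration of the kind of surrogate-based tuning described above, the sketch below uses a Gaussian-process surrogate (via scikit-optimize, chosen purely for illustration; the abstract does not name a library) to tune two hypothetical DRL reward parameters against an expensive policy-evaluation objective:

```python
# Minimal sketch of surrogate-optimizer tuning of DRL parameters.
# `evaluate_policy` and both parameter names are hypothetical stand-ins.
from skopt import gp_minimize
from skopt.space import Real

def evaluate_policy(params):
    alert_cost, separation_weight = params
    # In a real setup: train/evaluate the DRL collision avoidance policy with
    # these reward parameters and return a scalar loss (e.g., collision risk
    # plus an operational-alert penalty). Stubbed so the example runs.
    return (alert_cost - 0.1) ** 2 + (separation_weight - 2.0) ** 2

result = gp_minimize(
    evaluate_policy,
    dimensions=[Real(0.0, 1.0, name="alert_cost"),
                Real(0.1, 10.0, prior="log-uniform", name="separation_weight")],
    n_calls=30,        # each call is one expensive DRL training/evaluation run
    random_state=0,
)
print(result.x, result.fun)   # best parameters and best objective value
```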
We introduce an end-to-end computational framework that enables hyperparameter optimization with the DeepHyper library, accelerated training, and interpretable AI inference with a suite of state-of-the-art AI models, including CGCNN, PhysNet, SchNet, MPNN, MPNN-transformer, and TorchMD-Net. We use these AI models with the benchmark QM9, hMOF, and MD17 datasets to showcase the prediction of user-specified materials properties in modern computing environments, and to demonstrate translational applications for the modeling of small molecules, crystals, and metal-organic frameworks with a unified, stand-alone framework. We deployed and tested this framework on the ThetaGPU supercomputer at the Argonne Leadership Computing Facility and on the Delta supercomputer at the National Center for Supercomputing Applications to provide researchers with modern tools to conduct accelerated AI-driven discovery in leadership-class computing environments.
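A minimal sketch of the hyperparameter-optimization step with DeepHyper is shown below. The exact classes and signatures vary across DeepHyper versions, and the search space and `run` function here are illustrative assumptions, not the framework's actual configuration:

```python
# Sketch of a DeepHyper-style hyperparameter search (API is version-dependent).
from deephyper.problem import HpProblem
from deephyper.evaluator import Evaluator
from deephyper.search.hps import CBO

problem = HpProblem()
problem.add_hyperparameter((1e-5, 1e-2, "log-uniform"), "learning_rate")
problem.add_hyperparameter((16, 256), "batch_size")

def run(config):
    # In a real setup: train a property-prediction model (e.g., SchNet on QM9)
    # with `config` and return the validation score to maximize. Stubbed here.
    return -abs(config["learning_rate"] - 1e-3)

evaluator = Evaluator.create(run, method="process", method_kwargs={"num_workers": 4})
search = CBO(problem, evaluator)
results = search.search(max_evals=50)
```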
The most widely studied explainable AI (XAI) approaches are unsound. This is the case with well-known model-agnostic explanation approaches, and it is also the case with approaches based on saliency maps. One solution is to consider intrinsic interpretability, which does not exhibit the drawback of unsoundness. Unfortunately, intrinsic interpretability can display unwieldy explanation redundancy. Formal explainability represents the alternative to these non-rigorous approaches, with one example being PI-explanations. Unfortunately, PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size. Recently, it has been observed that the (absolute) rigor of PI-explanations can be traded off for a smaller explanation size, by computing the so-called relevant sets. Given some positive δ, a set S of features is δ-relevant if, when the features in S are fixed, the probability of getting the target class exceeds δ. However, even for very simple classifiers, the complexity of computing relevant sets of features is prohibitive, with the decision problem being NP^PP-complete for circuit-based classifiers. In contrast with earlier negative results, this paper investigates practical approaches for computing relevant sets for a number of widely used classifiers that include Decision Trees (DTs), Naive Bayes Classifiers (NBCs), and several families of classifiers obtained from propositional languages. Moreover, the paper shows that, in practice, and for these families of classifiers, relevant sets are easy to compute. Furthermore, the experiments confirm that succinct sets of relevant features can be obtained for the families of classifiers considered.
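In symbols, writing κ for the classifier, v for the given instance, and c for the target class, the δ-relevancy condition quoted above reads

$$\Pr_{\mathbf{x}}\left[\kappa(\mathbf{x}) = c \mid \mathbf{x}_{S} = \mathbf{v}_{S}\right] > \delta,$$

i.e., fixing the features in S to their values in v makes the probability, taken over the remaining free features, of predicting the target class exceed δ.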
Meshing is a critical but user-intensive process necessary for stable and accurate simulations in computational fluid dynamics (CFD), and mesh generation is often a bottleneck in CFD pipelines. Adaptive meshing techniques allow the mesh to be updated automatically to produce an accurate solution for the problem at hand. Existing classical techniques for adaptive meshing require additional functionality from solvers, many training simulations, or both. Current machine learning techniques often require substantial computational cost for training data generation, and are restricted in scope to the flow regime of the training data. MeshDQN is developed as a general-purpose deep reinforcement learning framework to iteratively coarsen meshes while preserving target property calculations. A graph neural network-based deep Q-network is used to select mesh vertices for removal, and solution interpolation is used to bypass expensive simulations at each step in the improvement process. MeshDQN requires a single simulation prior to mesh coarsening, while making no assumptions about flow regime, mesh type, or solver, requiring only the ability to modify meshes directly in a CFD pipeline. MeshDQN successfully improves meshes for two 2D airfoils.
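A minimal sketch of the vertex-selection step is given below, assuming per-vertex feature vectors. The plain MLP scorer is a hypothetical stand-in for MeshDQN's graph neural network, which would additionally aggregate information from neighboring vertices:

```python
import torch
import torch.nn as nn

class VertexQNet(nn.Module):
    """Assigns each mesh vertex a Q-value for removal (illustrative stand-in
    for a graph neural network based deep Q-network)."""
    def __init__(self, in_feats: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_feats, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, vertex_feats):                 # (num_vertices, in_feats)
        return self.mlp(vertex_feats).squeeze(-1)    # one Q-value per vertex

qnet = VertexQNet(in_feats=5)
feats = torch.randn(200, 5)          # e.g., position plus local solution values
with torch.no_grad():
    q = qnet(feats)
vertex_to_remove = int(q.argmax())   # greedy action: coarsen at this vertex
```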
We present a new convolution layer for deep learning architectures which we call QuadConv -- an approximation to continuous convolution via quadrature. Our operator is developed explicitly for use on unstructured data, and accomplishes this by learning a continuous kernel that can be sampled at arbitrary locations. In the setting of neural compression, we show that a QuadConv-based autoencoder, resulting in a Quadrature Convolutional Neural Network (QCNN), can match the performance of standard discrete convolutions on structured uniform data, as in CNNs, and maintain this accuracy on unstructured data.
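The core idea is to approximate the continuous convolution $(K * u)(x) = \int K(x - y)\,u(y)\,dy$ by a quadrature sum $\sum_i w_i\, K(x - y_i)\, u(y_i)$ over the unstructured sample points, with the kernel K itself a learned continuous function. A minimal sketch follows (names and architecture are illustrative assumptions, not the paper's implementation):

```python
import torch
import torch.nn as nn

class QuadConvSketch(nn.Module):
    """Quadrature-based continuous convolution: an MLP kernel is sampled at
    pairwise offsets and the integral becomes a weighted sum over points."""
    def __init__(self, dim: int, in_ch: int, out_ch: int, hidden: int = 32):
        super().__init__()
        # Continuous kernel K: R^dim -> R^(out_ch * in_ch)
        self.kernel = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, out_ch * in_ch),
        )
        self.in_ch, self.out_ch = in_ch, out_ch

    def forward(self, x_out, x_in, u, w):
        # x_out: (M, dim) output locations; x_in: (N, dim) input points
        # u: (N, in_ch) features; w: (N,) quadrature weights
        offsets = x_out[:, None, :] - x_in[None, :, :]               # (M, N, dim)
        K = self.kernel(offsets).view(*offsets.shape[:2], self.out_ch, self.in_ch)
        return torch.einsum("n,mnoi,ni->mo", w, K, u)  # quadrature-weighted sum

conv = QuadConvSketch(dim=2, in_ch=1, out_ch=4)
x_in, x_out = torch.rand(100, 2), torch.rand(50, 2)    # unstructured points
u, w = torch.rand(100, 1), torch.full((100,), 1.0 / 100)
y = conv(x_out, x_in, u, w)                            # (50, 4)
```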
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
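A small usage sketch is given below; the transform and network names follow recent MONAI releases, but exact signatures vary between versions, so treat this as indicative rather than definitive:

```python
# Sketch of a typical MONAI workflow (class names from recent releases;
# check your installed version, as signatures have changed over time).
import torch
from monai.transforms import Compose, LoadImage, EnsureChannelFirst, ScaleIntensity
from monai.networks.nets import UNet

preprocess = Compose([
    LoadImage(image_only=True),   # reads medical formats (NIfTI, DICOM, ...)
    EnsureChannelFirst(),         # put the channel dimension first
    ScaleIntensity(),             # normalize intensities to [0, 1]
])

net = UNet(
    spatial_dims=3,               # 3D medical volumes
    in_channels=1,
    out_channels=2,               # e.g., background vs. lesion
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)
out = net(torch.randn(1, 1, 64, 64, 64))   # (batch, channel, D, H, W)
```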
In this paper, we propose a new Control Barrier Function (CBF) for unmanned ground vehicles (UGVs) that helps avoid collisions with kinematic (non-zero velocity) obstacles. While current CBF formulations have been successful in guaranteeing safety/collision avoidance with static obstacles, extensions to the dynamic case have seen limited success. Furthermore, with UGV models such as the unicycle or the bicycle, applications of existing CBFs are conservative in terms of control, i.e., steering/thrust control is not possible in certain scenarios. Drawing inspiration from the classical use of collision cones in trajectory planning, we introduce their novel CBF formulation with safety guarantees for the unicycle and bicycle models. The main idea is to ensure that the obstacle's velocity with respect to the vehicle always points away from the vehicle; accordingly, we construct a constraint that ensures the relative velocity vector always avoids the cone of vectors pointing at the vehicle. The efficacy of this new control methodology is experimentally validated on the Copernicus mobile robot. We further extend it to autonomous vehicles in the form of the bicycle model and demonstrate collision avoidance under various scenarios in the CARLA simulator.
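A hedged reconstruction of the constraint described above (notation mine, not necessarily the paper's): with relative position $p_{\mathrm{rel}}$ and relative velocity $v_{\mathrm{rel}}$ of the obstacle with respect to the vehicle, and combined radius $r$, a collision-cone CBF candidate can be written as

$$h(x) = \langle p_{\mathrm{rel}}, v_{\mathrm{rel}} \rangle + \|p_{\mathrm{rel}}\|\,\|v_{\mathrm{rel}}\| \cos\phi, \qquad \cos\phi = \frac{\sqrt{\|p_{\mathrm{rel}}\|^{2} - r^{2}}}{\|p_{\mathrm{rel}}\|},$$

so that $h(x) \ge 0$ holds exactly when $v_{\mathrm{rel}}$ lies outside the cone of vectors pointing at the vehicle, and the standard CBF condition $\dot{h} + \alpha(h) \ge 0$ keeps it there.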
The goal of video temporal grounding (VTG) is to localize temporal moments in untrimmed videos according to a natural language (NL) description. Since real-world applications provide never-ending video streams, there is a growing need for temporal grounding of long-form videos, which poses two major challenges: (1) the sheer video length makes it difficult to process the entire video without reducing the sample rate, which incurs a high computational burden; and (2) accurate multi-modal alignment becomes more challenging as the number of candidate moments increases. To address these challenges, we propose CONE, an efficient window-centric COarse-to-fiNE alignment framework that flexibly handles long-form video inputs with higher inference speed and enhances temporal grounding via a novel coarse-to-fine multi-modal alignment scheme. Specifically, we slice the long video into candidate windows via a sliding-window approach. CONE (1) learns inter-window (coarse-grained) semantic distinctions through contrastive learning and filters the candidate windows relevant to the NL query, and (2) performs intra-window (fine-grained) moment ranking using the strong multi-modal alignment capability of a contrastive vision-text pre-trained model. Extensive experiments on two large-scale VTG benchmarks for long videos consistently show substantial performance gains (from 3.13% to 6.87% on MAD and from 10.46% to 13.46% on Ego4D-NLQ), and CONE achieves SOTA results on both datasets. Analysis confirms the effectiveness of each component and the higher efficiency of our system for long-video grounding: it improves inference speed by 2x on Ego4D-NLQ and by 15x on MAD while keeping CONE's SOTA performance.
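A minimal sketch of the window-slicing and coarse-filtering stages might look as follows (function names and the top-k filtering rule are illustrative assumptions; the paper's coarse stage is learned contrastively):

```python
import numpy as np

def sliding_windows(num_frames: int, window_size: int, stride: int):
    """Slice a long video into overlapping candidate windows (frame spans)."""
    starts = range(0, max(num_frames - window_size, 0) + 1, stride)
    return [(s, min(s + window_size, num_frames)) for s in starts]

def coarse_filter(window_feats: np.ndarray, query_feat: np.ndarray, k: int = 5):
    """Keep the top-k windows by window-query similarity; only the survivors
    proceed to the fine-grained, intra-window moment ranking stage."""
    sims = window_feats @ query_feat     # cosine similarity if L2-normalized
    return np.argsort(-sims)[:k]

windows = sliding_windows(num_frames=18000, window_size=900, stride=450)
```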
Metal artifact correction is a challenging problem in cone-beam computed tomography (CBCT) scanning. Metal implants inserted into the anatomy cause severe artifacts in the reconstructed images. The widely used inpainting-based metal artifact reduction (MAR) methods require the segmentation of metal traces in the projections, which is a difficult task. One approach is to use a deep learning method to segment the metal in the projections. However, the success of deep learning methods is limited by the availability of realistic training data, since obtaining reliable ground-truth annotations is challenging and time-consuming due to unclear implant boundaries and the large number of projections. We propose to use X-ray simulations to generate synthetic metal segmentation training datasets from clinical CBCT scans. We compare the effect of simulating with different numbers of photons, and we also compare several training strategies for augmenting the available data. We compare our model's performance on real clinical scans against conventional threshold-based MAR and a recent deep learning method. We show that simulations with a relatively small number of photons are suitable for the metal segmentation task, and that jointly training the deep learning model on full-size and cropped projections improves its robustness. We observe substantial improvements in image quality for images affected by severe motion, voxel-size downsampling, and metal outside the field of view. Our method can easily be implemented within existing projection-based MAR pipelines to improve image quality, and it can provide a new paradigm for accurately segmenting metal in CBCT projections.
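The photon-count aspect of such simulations can be sketched with the standard Beer-Lambert attenuation model plus Poisson counting noise; this is a generic illustration of X-ray projection simulation, not the paper's specific pipeline:

```python
import numpy as np

def simulate_projection(line_integrals: np.ndarray, n_photons: float) -> np.ndarray:
    """Simulate a noisy X-ray projection from ray-wise attenuation integrals.

    line_integrals: integral of the attenuation coefficient mu along each ray.
    n_photons: incident photon count per detector pixel; fewer photons = noisier.
    """
    expected_counts = n_photons * np.exp(-line_integrals)  # Beer-Lambert law
    counts = np.random.poisson(expected_counts)            # quantum noise
    counts = np.maximum(counts, 1)                         # guard against log(0)
    return -np.log(counts / n_photons)                     # back to line integrals

# Ground-truth metal labels come for free in simulation: a detector pixel is
# "metal" iff its ray intersects the implant mask inserted into the volume.
sino = simulate_projection(np.random.rand(256, 256) * 3.0, n_photons=1e4)
```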
We present Tile2Tile, a method for style transfer between levels of tile-based platformer games. Our method involves training models that translate levels from a low-resolution sketch representation based on tile affordances into the original tile representation of a given game. This enables these models, which we call filters, to translate level sketches into the style of a specific game. Moreover, by converting a level of one game into sketch form and then translating the resulting sketch into the tiles of another game, we obtain a method of style transfer between the two games. We learn the game filters using Markov random fields and autoencoders, and apply them for style transfer between levels of Super Mario Bros, Kid Icarus, Mega Man, and Metroid.
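The two-step transfer can be sketched as follows, where the tile-to-affordance mapping and the filter interface are hypothetical illustrations of the described pipeline:

```python
# Hypothetical affordance vocabulary shared across games (illustrative only).
SKETCH_OF_MARIO = {"X": "solid", "-": "empty", "?": "interactive", "E": "hazard"}

def level_to_sketch(level, tile_to_sketch):
    """Map a game-specific tile level to the shared low-resolution sketch."""
    return [[tile_to_sketch.get(t, "empty") for t in row] for row in level]

def style_transfer(level_a, sketch_of_a, filter_b):
    """Game A level -> shared sketch -> game B tiles. `filter_b` is a learned
    sketch-to-tiles model (a Markov random field or autoencoder in the paper)."""
    sketch = level_to_sketch(level_a, sketch_of_a)
    return filter_b(sketch)
```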